
    Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing

    Full text link
    The accuracy of Automated Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but WER has been found to have little correlation with human-subject performance in many applications. We propose a new captioning-focused evaluation metric that better predicts the impact of ASR recognition errors on the usability of automatically generated captions for people who are Deaf or Hard of Hearing (DHH). Through a user study with 30 DHH users, we compared our new metric with the traditional WER metric on a caption usability evaluation task. In a side-by-side comparison of pairs of ASR text output (with identical WER), the texts rated higher by our new metric were the ones preferred by DHH participants. Further, our metric had significantly higher correlation with DHH participants' subjective scores on the usability of a caption than did the WER metric. This new metric could be used to select ASR systems for captioning applications, and it may be a better metric for ASR researchers to consider when optimizing ASR systems.
    Comment: 10 pages, 8 figures, published in ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '17).
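
    The WER baseline discussed above is simple to compute; the following is a minimal sketch of the standard word-level edit-distance formulation (the paper's proposed captioning-focused metric is not specified in the abstract and is not reproduced here). Whitespace tokenization and unit edit costs are assumptions.

    def wer(reference, hypothesis):
        """WER = (substitutions + deletions + insertions) / number of reference words,
        computed via word-level Levenshtein distance."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost) # substitution or match
        return dp[-1][-1] / max(len(ref), 1)

    print(wer("the quick brown fox", "the quack brown fox jumps"))  # 0.5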

    Frame Theory for Signal Processing in Psychoacoustics

    Full text link
    This review chapter aims to strengthen the link between frame theory and signal processing tasks in psychoacoustics. On the one hand, the basic concepts of frame theory are presented, and selected proofs are provided to explain those concepts in detail. The goal is to show hearing scientists how this mathematical theory could be relevant to their research. In particular, we focus on frame theory in a filter bank setting, which is probably the most relevant viewpoint for audio signal processing. On the other hand, basic psychoacoustic concepts are presented to encourage mathematicians to apply their knowledge in this field.
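
    As a concrete illustration of the filter-bank viewpoint, the sketch below checks the frame bounds of a discrete Gabor (STFT) filter bank in the "painless" case (window support not exceeding the number of channels), where the frame operator is diagonal. This is a minimal sketch of a standard result, not material from the chapter; the window, hop size, and channel count are illustrative assumptions.

    import numpy as np

    def painless_gabor_frame_bounds(window, hop, n_channels, signal_len):
        """Frame bounds A, B of a discrete Gabor frame in the painless case:
        the frame operator is diagonal with entries M * sum_k |g[n - k*hop]|^2."""
        g = np.pad(window, (0, signal_len - len(window)))
        diag = np.zeros(signal_len)
        for k in range(signal_len // hop):           # periodic time shifts of the window
            diag += np.abs(np.roll(g, k * hop)) ** 2
        diag *= n_channels
        return diag.min(), diag.max()                # frame iff A > 0; A == B means a tight frame

    # Hann window of length 256, hop 64 (75% overlap), 256 frequency channels
    A, B = painless_gabor_frame_bounds(np.hanning(256), 64, 256, 1024)
    print(f"frame bounds A={A:.1f}, B={B:.1f}, B/A={B/A:.3f}")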

    Effects of Long-Term Ageing on Speaker Verification

    No full text
    The changes that occur in the human voice due to ageing have been well documented. The impact of these changes on speaker verification is less clear. In this work, we examine the effect of long-term vocal ageing on a speaker verification system. On a cohort of 13 adult speakers, using a conventional GMM-UBM system, we carry out longitudinal testing of each speaker across a time span of 30-40 years. We uncover a progressive degradation in verification score as the time span between the training and test material increases. The addition of temporal information to the features causes the rate of degradation to increase. No significant difference was found between MFCC and PLP features. Subsequent experiments show that the effect of short-term ageing (<5 years) is not significant compared with normal inter-session variability. Above this time span, however, ageing has a detrimental effect on verification. Finally, we show that the age of the speaker at the time of training influences the rate at which the verification scores degrade. Our results suggest that the verification score drop-off accelerates for speakers over the age of 60. The results presented are the first of their kind to quantify the effect of long-term vocal ageing on speaker verification.
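
    For context, the sketch below outlines the conventional GMM-UBM scoring scheme named above: a universal background model (UBM) is trained on background speech, each speaker model is obtained by mean-only MAP adaptation of the UBM, and a trial is scored as the average log-likelihood ratio between the two. This is a generic sketch, not the authors' implementation; the component count, relevance factor, and feature shapes are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def map_adapt_means(ubm, feats, relevance=16.0):
        """Mean-only MAP adaptation of the UBM to one speaker's feature frames."""
        post = ubm.predict_proba(feats)              # (T, K) component responsibilities
        n_k = post.sum(axis=0)                       # zeroth-order (soft count) statistics
        f_k = post.T @ feats                         # first-order statistics
        alpha = (n_k / (n_k + relevance))[:, None]
        return alpha * f_k / np.maximum(n_k, 1e-8)[:, None] + (1 - alpha) * ubm.means_

    def llr_score(ubm, speaker_means, feats):
        """Average log-likelihood ratio of a test segment: speaker model vs. UBM."""
        spk = GaussianMixture(n_components=ubm.n_components, covariance_type="diag")
        spk.weights_, spk.means_ = ubm.weights_, speaker_means
        spk.covariances_ = ubm.covariances_
        spk.precisions_cholesky_ = ubm.precisions_cholesky_   # covariances shared with the UBM
        return spk.score(feats) - ubm.score(feats)   # score() is the mean log-likelihood

    # feats are e.g. MFCC(+delta) frames of shape (T, D):
    # ubm = GaussianMixture(512, covariance_type="diag").fit(background_feats)
    # spk_means = map_adapt_means(ubm, enrolment_feats)
    # trial_score = llr_score(ubm, spk_means, test_feats)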

    A Web-oriented Java 3D Talking Head

    No full text
    Facial animation denotes all those systems that perform speech synchronization with an animated face model; such systems are called Talking Heads or Talking Faces. In parallel, simple dialogue systems called chatbots have been developed: software agents able to interact with users through pattern-matching rules. In this paper, a Talking Head oriented to the creation of a chatbot is presented. An answer is generated in the form of text, triggered by an input query. The answer is converted into a facial animation using a 3D face model whose lip movements are synchronized with the sound produced by a speech synthesis module. Our Talking Head exploits the naturalness of the facial animation and provides a real-time interactive interface to the user. Moreover, it is specifically suited for use on the web, which leads to a set of requirements to be satisfied: simple installation, visual quality, fast download, and real-time interactivity. The web infrastructure has been realized using the client-server model. The chatbot, the Natural Language Processing, and the Digital Signal Processing services are delegated to the server; the client handles animation and synchronization, so the server can handle multiple requests from clients. The conversation module has been implemented using the A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) technology. The output of the chatbot is given as input to the Natural Language Processing module (Comedia Speech), which incorporates a text analyzer, a letter-to-sound module, and a module for the generation of prosody. The client, through the synchronization module, computes the actual duration of the animation and the duration of each phoneme, and consequently of each viseme. The morphing module performs the animation of the facial model and the voice reproduction. As a result, the user sees the answer to the question both in textual form and as a visual animation.
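
    As a rough illustration of the synchronization step described above, the sketch below maps phoneme durations to viseme keyframes and rescales them to the real playback duration of the synthesized audio. It is a minimal sketch of the general idea, not the system's code; the phoneme-to-viseme table and all names are illustrative assumptions.

    # Hypothetical phoneme-to-viseme mapping (not the system's actual table)
    PHONEME_TO_VISEME = {
        "p": "BMP", "b": "BMP", "m": "BMP",
        "f": "FV", "v": "FV",
        "a": "AH", "e": "EH", "i": "IY", "o": "OW", "u": "UW",
    }

    def viseme_track(phonemes, audio_duration):
        """phonemes: list of (phoneme, nominal_duration_s) from the text analyzer.
        Returns (viseme, start_s, end_s) keyframes rescaled so the track spans
        exactly the real duration of the synthesized audio."""
        nominal_total = sum(d for _, d in phonemes) or 1e-9
        scale = audio_duration / nominal_total
        track, t = [], 0.0
        for ph, dur in phonemes:
            d = dur * scale
            track.append((PHONEME_TO_VISEME.get(ph, "REST"), t, t + d))
            t += d
        return track

    print(viseme_track([("h", 0.06), ("e", 0.12), ("l", 0.07), ("o", 0.15)], 0.5))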

    Combination of Similarity Measures for Effective Spoken Document Retrieval

    No full text
    Users of information retrieval systems and document authors often use different terms to refer to the same concept. For this simple reason, information retrieval is affected by the 'term mismatch' problem. Term mismatch not only hinders the retrieval of relevant documents, it also degrades their ranking. A similar problem arises in spoken document retrieval, where terms misrecognized by the speech recognition process can hinder the retrieval of potentially relevant spoken documents. We call this problem 'term misrecognition', by analogy with term mismatch. This paper presents two classes of retrieval models that attempt to tackle both the term mismatch and the term misrecognition problems at retrieval time using term similarity information. The models use either complete or partial knowledge of semantic and phonetic term similarity, estimated from the corpus using statistical methods.
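
    The sketch below illustrates the general idea of using term similarity at retrieval time (it is not the paper's models): each query term is matched not only against itself but also against semantically or phonetically similar document terms, with the similarity acting as a weight. The toy similarity table and all names are illustrative assumptions.

    from collections import Counter

    TERM_SIM = {                       # sim(term_a, term_b); an exact match scores 1.0
        ("grate", "great"): 0.8,       # phonetic confusion typical of a recognizer
        ("car", "automobile"): 0.6,    # semantic relatedness
    }

    def sim(a, b):
        return 1.0 if a == b else TERM_SIM.get((a, b), TERM_SIM.get((b, a), 0.0))

    def score(query_terms, doc_terms):
        """Similarity-smoothed matching score of one (spoken) document."""
        tf = Counter(doc_terms)
        return sum(sim(q, t) * c for q in query_terms for t, c in tf.items())

    docs = {"d1": "a great concert hall".split(), "d2": "grate the cheese".split()}
    print(sorted(docs, key=lambda d: score(["great", "hall"], docs[d]), reverse=True))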